Plug-in, Trainable Gate for Streamlining Arbitrary Neural Networks
Authors
Abstract
Similar sources
Adaptive Control Schemes Based on Recurrent Trainable Neural Networks
Abstract: The aim of this paper is to integrate a recurrent neural network into two schemes of real-time soft-computing neural control. Two control schemes are applied: indirect and direct trajectory-tracking control, using the state and parameter information provided by an identification recurrent neural network. The applicability of the proposed control schemes is conf...
Incremental Evolution of Trainable Neural Networks that are Backwards Compatible
Supervised learning has long been used to modify artificial neural networks in order to perform classification tasks. However, the standard fully-connected layered design is often inadequate for such tasks. We demonstrate that evolution can be used to design an artificial neural network that learns faster and more accurately. By evolving artificial neural networks within a dynamic...
MinimalRNN: Toward More Interpretable and Trainable Recurrent Neural Networks
We introduce MinimalRNN, a new recurrent neural network architecture that achieves performance comparable to the popular gated RNNs with a simplified structure. It employs minimal updates within the RNN, which not only leads to efficient learning and testing but, more importantly, to better interpretability and trainability. We demonstrate that by endorsing the more restrictive update rule, MinimalRNN l...
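For orientation, below is a minimal NumPy sketch of the kind of single-gate, simplified recurrent update the MinimalRNN abstract above alludes to. The exact parameterization (the tanh input map and the names W_x, U_h, U_z, b_u, and the toy dimensions) is an assumption for illustration, not something stated on this page.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def minimal_rnn_step(x_t, h_prev, params):
    """One MinimalRNN-style step (sketch): the input is first mapped to a
    latent z_t, and the hidden state is updated through a single gate u_t."""
    W_x, b_x, U_h, U_z, b_u = params                 # assumed parameter names
    z_t = np.tanh(W_x @ x_t + b_x)                   # latent input representation
    u_t = sigmoid(U_h @ h_prev + U_z @ z_t + b_u)    # single update gate
    return u_t * h_prev + (1.0 - u_t) * z_t          # convex mix of old state and new input

# Toy usage with hypothetical dimensions.
d_in, d_h = 8, 16
rng = np.random.default_rng(0)
params = (rng.normal(size=(d_h, d_in)) * 0.1, np.zeros(d_h),
          rng.normal(size=(d_h, d_h)) * 0.1, rng.normal(size=(d_h, d_h)) * 0.1,
          np.zeros(d_h))
h = np.zeros(d_h)
for x_t in rng.normal(size=(5, d_in)):               # toy sequence of length 5
    h = minimal_rnn_step(x_t, h, params)
```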
Recurrent neural networks with trainable amplitude of activation functions
An adaptive amplitude real-time recurrent learning (AARTRL) algorithm is proposed for fully connected recurrent neural networks (RNNs) employed as nonlinear adaptive filters. Such an algorithm is beneficial when dealing with signals that have rich and unknown dynamical characteristics. Following the approach from earlier work, three different cases of the algorithm are considered: a common adaptive amplitu...
Trainable Greedy Decoding for Neural Machine Translation
Recent research in neural machine translation has largely focused on two aspects: neural network architectures and end-to-end learning algorithms. The problem of decoding, however, has received relatively little attention from the research community. In this paper, we focus solely on the problem of decoding given a trained neural machine translation model. Instead of trying to build a new decodi...
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2020
ISSN: 2374-3468, 2159-5399
DOI: 10.1609/aaai.v34i04.5872